
    Recursive Monte Carlo filters: Algorithms and theoretical analysis

    Recursive Monte Carlo filters, also called particle filters, are a powerful tool for performing computations in general state space models. We discuss and compare the accept-reject version of the algorithm with the more common sampling importance resampling (SIR) version. In particular, we show how auxiliary variable methods and stratification can be used in the accept-reject version, and we compare different resampling techniques. In a second part, we show laws of large numbers and a central limit theorem for these Monte Carlo filters by simple induction arguments that need only weak conditions. We also show that, under stronger conditions, the required sample size is independent of the length of the observed series.
    Published in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org), DOI: http://dx.doi.org/10.1214/009053605000000426.
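    As a concrete illustration of the SIR version discussed above, the sketch below runs a bootstrap particle filter with multinomial resampling on a toy one-dimensional model. The model, the function names, and the choice of multinomial resampling are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def sir_filter(y, n_particles, propagate, loglik, init):
    """Minimal SIR particle filter: propagate, weight by the observation
    likelihood, then resample (multinomial here; the paper compares
    several resampling schemes)."""
    x = init(n_particles)
    means = []
    for yt in y:
        x = propagate(x)                        # propagation step
        logw = loglik(yt, x)                    # importance weights
        w = np.exp(logw - logw.max())
        w /= w.sum()
        means.append(np.sum(w * x))             # filtering mean estimate
        idx = rng.choice(n_particles, size=n_particles, p=w)
        x = x[idx]                              # resampling step
    return np.array(means)

# toy nonlinear model, purely illustrative:
propagate = lambda x: 0.9 * x + 0.5 * rng.standard_normal(x.size)
loglik = lambda yt, x: -0.5 * (yt - np.sin(x))**2   # unit observation noise
init = lambda n: rng.standard_normal(n)

y = np.sin(np.linspace(0, 5, 50)) + 0.3 * rng.standard_normal(50)
print(sir_filter(y, 1000, propagate, loglik, init)[:5])
```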

    Comment: The 2005 Neyman Lecture: Dynamic Indeterminism in Science

    Comment on "The 2005 Neyman Lecture: Dynamic Indeterminism in Science" [arXiv:0808.0620].
    Published in Statistical Science (http://www.imstat.org/sts/) by the Institute of Mathematical Statistics (http://www.imstat.org), DOI: http://dx.doi.org/10.1214/07-STS246B.

    Bridging the ensemble Kalman and particle filter

    In many applications of Monte Carlo nonlinear filtering, the propagation step is computationally expensive, and hence the sample size is limited. With small sample sizes, the update step becomes crucial. Particle filtering suffers from the well-known problem of sample degeneracy. Ensemble Kalman filtering avoids this, at the expense of treating non-Gaussian features of the forecast distribution incorrectly. Here we introduce a procedure that makes a continuous transition, indexed by gamma in [0,1], between the ensemble Kalman filter update and the particle filter update. We propose automatic choices of the parameter gamma such that the update stays as close as possible to the particle filter update subject to avoiding degeneracy. In various examples, we show that this procedure leads to updates which are able to handle non-Gaussian features of the prediction sample even in high-dimensional situations.
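    The automatic choice of gamma can be illustrated with an effective-sample-size criterion: keep the particle-filter part of the update as close to a pure particle filter as degeneracy allows. The sketch below is a hypothetical stand-in for the paper's procedure; tempering the weights by the power (1 - gamma) and the threshold n_min are assumptions for illustration.

```python
import numpy as np

def ess(logw):
    """Effective sample size of normalized importance weights."""
    w = np.exp(logw - logw.max())
    w /= w.sum()
    return 1.0 / np.sum(w**2)

def choose_gamma(loglik, n_min, grid=np.linspace(0.0, 1.0, 101)):
    """Illustrative automatic choice of gamma: the particle-filter part
    of the update weights particles by the likelihood raised to the
    power (1 - gamma); pick the smallest gamma (closest to the pure
    particle filter) whose tempered weights keep the effective sample
    size above n_min."""
    for g in grid:
        if ess((1.0 - g) * loglik) >= n_min:
            return g
    return 1.0  # fall back to the pure ensemble Kalman update
```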

    Robust Methods for Credibility

    Excess claims lead to unsatisfactory behavior of standard linear credibility estimators. In this paper we suggest using robust methods in order to obtain better estimators. Our first proposal is the linear credibility estimator with the claims replaced by a robust M-estimator of scale calculated from the claims. This corresponds to a truncation of the claims, with a truncation point that depends on the data and differs for each contract. We discuss the properties of the robust M-estimator and present several examples. In order to improve the performance for a very small number of years, we propose a second estimator, which incorporates information from other claims into the M-estimator.
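    A minimal sketch of the first proposal's ingredients, assuming a truncation-type M-estimator of scale defined by the fixed point s = mean(min(X_i, c*s)); the tuning constant c, the credibility weight z, and the collective mean are illustrative assumptions, not the paper's exact choices.

```python
import numpy as np

def truncated_scale(claims, c=2.0, tol=1e-8, max_iter=200):
    """M-estimator of scale via the fixed point s = mean(min(x, c*s)),
    i.e. the claims are truncated at the data-dependent point c*s."""
    s = np.mean(claims)  # start from the ordinary mean
    for _ in range(max_iter):
        s_new = np.mean(np.minimum(claims, c * s))
        if abs(s_new - s) <= tol * s:
            break
        s = s_new
    return s

def robust_credibility(claims, collective_mean, z):
    """Linear credibility with the individual claims average replaced
    by the robust M-estimator of scale (z is the credibility weight)."""
    return z * truncated_scale(claims) + (1.0 - z) * collective_mean
```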

    Intrinsic autoregressions and related models on the two-dimensional lattice

    Stationary autoregressions on a two-dimensional lattice are generalized to intrinsic models where only increments are assumed to be stationary. Prediction formulae and the asymptotic behaviour of the semivariogram are derived. For parameter estimation we propose an approximate maximum likelihood estimator, a generalization of Whittle's estimator; it is derived also for general intrinsic models.
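    The Whittle-type estimator mentioned above minimizes an approximate negative log-likelihood over Fourier frequencies. The sketch below assumes a hypothetical first-order symmetric autoregression spectral density with a scalar parameter; the paper's estimator covers general intrinsic models, which this sketch only gestures at by dropping the zero frequency.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def whittle_fit(field, spec_dens, bounds=(0.01, 0.24)):
    """Approximate (Whittle) likelihood fit on a 2-D lattice: minimize
    sum_k [log f(w_k; theta) + I(w_k)/f(w_k; theta)] over Fourier
    frequencies w_k, where I is the periodogram. The zero frequency is
    dropped, as appropriate when the spectral density diverges at the
    origin (the intrinsic case)."""
    n1, n2 = field.shape
    I = np.abs(np.fft.fft2(field - field.mean()))**2 / (n1 * n2)
    w1 = 2 * np.pi * np.fft.fftfreq(n1)
    w2 = 2 * np.pi * np.fft.fftfreq(n2)
    W1, W2 = np.meshgrid(w1, w2, indexing="ij")
    mask = (W1 != 0) | (W2 != 0)
    def neg_whittle(theta):
        f = spec_dens(W1[mask], W2[mask], theta)
        return np.sum(np.log(f) + I[mask] / f)
    return minimize_scalar(neg_whittle, bounds=bounds, method="bounded").x

# hypothetical spectral density (up to a constant) of a first-order
# symmetric autoregression; theta -> 1/4 approaches the intrinsic case
def sar_spec(w1, w2, theta):
    return 1.0 / (1.0 - 2.0 * theta * (np.cos(w1) + np.cos(w2)))**2
```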

    A dynamic nonstationary spatio-temporal model for short term prediction of precipitation

    Precipitation is a complex physical process that varies in space and time. Predictions and interpolations at unobserved times and/or locations help to solve important problems in many areas. In this paper, we present a hierarchical Bayesian model for spatio-temporal data and apply it to obtain short term predictions of rainfall. The model incorporates physical knowledge about the underlying processes that determine rainfall, such as advection, diffusion and convection. It is based on a temporal autoregressive convolution with spatially colored and temporally white innovations. By linking the advection parameter of the convolution kernel to an external wind vector, the model is temporally nonstationary. Further, it allows for nonseparable and anisotropic covariance structures. With the help of the Voronoi tessellation, we construct a natural parametrization, that is, one consistent across space and time resolutions, for data lying on irregular grid points. In the application, the statistical model combines forecasts of three other meteorological variables obtained from a numerical weather prediction model with past precipitation observations. The model is then used to predict three-hourly precipitation over 24 hours. It performs better than a separable, stationary and isotropic version, performs comparably to a deterministic numerical weather prediction model for precipitation, and has the advantage that it quantifies prediction uncertainty.
    Published in the Annals of Applied Statistics (http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics (http://www.imstat.org), DOI: http://dx.doi.org/10.1214/12-AOAS564.
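    The core dynamic, a temporal autoregressive convolution whose kernel is advected by an external wind vector and driven by spatially colored, temporally white innovations, can be sketched on a regular grid. The paper works on irregular grid points via a Voronoi tessellation; the Gaussian kernel and all parameter values below are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, shift

def ar_convolution_step(field, wind, phi=0.9, kernel_sd=1.5,
                        innov_sd=0.1, rng=None):
    """One time step of a temporal AR(1) convolution on a regular grid:
    advect the current field along the wind vector, smooth it
    (diffusion), damp it, and add spatially colored, temporally white
    innovations. All parameter values are illustrative."""
    rng = rng or np.random.default_rng()
    advected = shift(field, shift=wind, order=1, mode="nearest")  # advection
    diffused = gaussian_filter(advected, sigma=kernel_sd)         # diffusion
    innovation = gaussian_filter(rng.standard_normal(field.shape),
                                 sigma=kernel_sd)                 # colored noise
    return phi * diffused + innov_sd * innovation                 # AR(1) update
```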

    A Conversation with Peter Huber

    Peter J. Huber was born on March 25, 1934, in Wohlen, a small town in the Swiss countryside. He obtained a diploma in mathematics in 1958 and a Ph.D. in mathematics in 1961, both from ETH Zurich. His thesis was in pure mathematics, but he then decided to go into statistics. He spent 1961-1963 as a postdoc at the statistics department in Berkeley, where he wrote his first and most famous paper on robust statistics, "Robust Estimation of a Location Parameter." After a position as a visiting professor at Cornell University, he became a full professor at ETH Zurich. He worked at ETH until 1978, interspersed with visiting positions at Cornell, Yale, Princeton and Harvard. After leaving ETH, he held professorships at Harvard University (1978-1988), at MIT (1988-1992), and finally at the University of Bayreuth from 1992 until his retirement in 1999. He now lives in Klosters, a village in the Grisons in the Swiss Alps. Peter Huber has published four books and over 70 papers on statistics and data analysis. In addition, he has written more than a dozen papers and two books on Babylonian mathematics, astronomy and history. In 1972, he delivered the Wald lectures. He is a fellow of the IMS, the American Association for the Advancement of Science, and the American Academy of Arts and Sciences. In 1988 he received a Humboldt Award and in 1994 an honorary doctorate from the University of Neuchâtel. In addition to his fundamental results in robust statistics, Peter Huber made important contributions to computational statistics, strategies in data analysis, and applications of statistics in fields such as crystallography, EEGs, and human growth curves.
    Published in Statistical Science (http://www.imstat.org/sts/) by the Institute of Mathematical Statistics (http://www.imstat.org), DOI: http://dx.doi.org/10.1214/07-STS251.

    Optimal lattices for sampling

    The generalization of the sampling theorem to multidimensional signals is considered, with or without bandwidth constraints. The signal is modeled as a stationary random process and sampled on a lattice. Exact expressions for the mean-square error of the best linear interpolator are given in the frequency domain. Moreover, asymptotic expansions are derived for the average mean-square error when the sampling rate tends to zero and to infinity, respectively. This makes it possible to determine the optimal lattices for sampling. In the low-rate sampling case, or equivalently for rough processes, the optimal lattice is the one which solves the packing problem, whereas in the high-rate sampling case, or equivalently for smooth processes, the optimal lattice is the one which solves the dual packing problem. In addition, the best linear interpolation is compared with ideal low-pass filtering (cardinal interpolation).
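    For orientation, the classical frequency-domain expression for the average mean-square error of the best linear interpolator of a stationary process with spectral density f, sampled on a lattice Lambda with dual lattice Lambda*, has the following form (a textbook identity stated under standard regularity assumptions, not quoted from the paper):

```latex
% aliased (lattice-periodized) spectral density and interpolation error
\[
  f_{\Lambda}(\omega) = \sum_{k \in \Lambda^{*}} f(\omega + k),
  \qquad
  \mathrm{MSE} = \int_{\mathbb{R}^d} f(\omega)\,d\omega
  - \int_{\mathbb{R}^d} \frac{f(\omega)^2}{f_{\Lambda}(\omega)}\,d\omega .
\]
```

    When f is supported in a fundamental domain of the dual lattice, the second integral equals the first and the error vanishes, recovering the classical sampling theorem; as the abstract describes, the packing and dual-packing characterizations come from asymptotic expansions of such an error expression in the low- and high-rate limits.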